Search results for "Activation function"

Showing 10 of 10 documents

Optimal implementation of neural activation functions in programmable logic using fuzzy logic

2006

This work presents a methodology for implementing neural activation functions in programmable logic using tools from fuzzy logic. The methodology allows these intrinsically non-linear functions to be implemented with comparators and simple linear modellers, both easily realized in programmable logic. The work is particularized to the hyperbolic tangent, the most common function in neural models, and shows the excellent results yielded by the proposed approximation.
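
As a rough illustration of the idea, the sketch below approximates tanh with a few comparisons and linear segments, the only operations the method needs from the logic fabric. The breakpoints and slopes are illustrative guesses, not the paper's fuzzy-logic-derived values.

```python
import numpy as np

def pwl_tanh(x):
    """Piecewise-linear tanh approximation built from comparators and
    linear segments (hypothetical breakpoints, not the paper's)."""
    x = np.asarray(x, dtype=float)
    s, a = np.sign(x), np.abs(x)
    y = np.where(a >= 2.0, 1.0,                         # saturation region
        np.where(a >= 0.5, 0.462 + 0.359 * (a - 0.5),   # shallow segment
                 0.924 * a))                            # steep segment near 0
    return s * y

# Worst-case deviation from the true tanh over [-4, 4] is roughly 0.12
# with these illustrative segments.
xs = np.linspace(-4, 4, 1001)
print(np.max(np.abs(pwl_tanh(xs) - np.tanh(xs))))
```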

Sequential logic · Function block diagram · Neuro-fuzzy · Artificial neural network · Computer science · Circuit design · Activation function · Logic family · Control engineering · Complex programmable logic device · Fuzzy logic · Programmable logic array · Fuzzy electronics · Programmable logic device · Logic synthesis · Simple programmable logic device · Logic optimization · Register-transfer level
IFAC Proceedings Volumes

Efficient MLP Digital Implementation on FPGA

2005

The efficiency and accuracy of a digital feed-forward neural network must be optimized to obtain both a high classification rate and minimum area on chip. In this paper an efficient MLP digital implementation is presented. The key features of the hardware implementation are the virtual-neuron-based architecture and the use of a sinusoidal activation function for the hidden layer. The effectiveness of the proposed solutions has been evaluated by developing different FPGA-based neural prototypes for the High Energy Physics domain and the automatic Road Sign Recognition domain. The use of the sinusoidal activation function decreases hardware resource usage by about 32% when compared with the standar…
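
As a software-level sketch (not the paper's hardware description), the forward pass below uses sin() in the hidden layer; shapes and weights are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)   # illustrative shapes
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)

def mlp_forward(x):
    # Hidden layer uses a sinusoidal activation, which on an FPGA can be
    # served by a compact sine unit instead of a sigmoid approximation.
    h = np.sin(x @ W1 + b1)
    return h @ W2 + b2

print(mlp_forward(rng.normal(size=(4, 8))).shape)  # (4, 3)
```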

Artificial neural network · Computer science · Activation function · Field programmable gate arrays (FPGA) · Sigmoid function · Machine learning · Transfer function · Domain (software engineering) · Feedforward neural network · System on a chip · Artificial intelligence · Field-programmable gate array · Computer hardware · Neural networks

A Novel Systolic Parallel Hardware Architecture for the FPGA Acceleration of Feedforward Neural Networks

2019

New chips for machine learning applications keep appearing; they are tuned for a specific topology, achieving efficiency through highly parallel designs at the cost of high power consumption or large, complex devices. However, the computational demands of deep neural networks require flexible and efficient hardware architectures able to fit different applications, neural network types, and numbers of inputs, outputs, layers, and units in each layer, making the migration from software to hardware easy. This paper describes a novel hardware architecture implementing any feedforward neural network (FFNN): multilayer perceptron, autoencoder, and logistic regression. The architecture admits an arbitrary input and output number, units in la…
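
The computation such a systolic array pipelines is the plain layer-by-layer FFNN forward pass. The sketch below shows that reference computation with an arbitrary layer list (sizes illustrative), not the hardware itself; activation placement is a modeling choice.

```python
import numpy as np

def ffnn_forward(x, layers, act=np.tanh):
    """Reference FFNN forward pass: each layer is a multiply-accumulate
    wave followed by an activation, which a systolic array pipelines.
    `layers` is any list of (W, b) pairs -- arbitrary depth and widths."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:   # no activation on the output layer here
            x = act(x)
    return x

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(10, 32)), np.zeros(32)),
          (rng.normal(size=(32, 32)), np.zeros(32)),
          (rng.normal(size=(32, 2)), np.zeros(2))]
print(ffnn_forward(rng.normal(size=(5, 10)), layers).shape)  # (5, 2)
```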

Hardware architecture · Floating point · General Computer Science · Artificial neural network · Computer science · Clock rate · Activation function · General Engineering · Computer systems · Autoencoder · Computer architecture · Computational science · neural network acceleration · FPGA implementation · deep neural networks · Multilayer perceptron · Feedforward neural networks (FFNN) · Feedforward neural network · Neural networks (computer science) · General Materials Science · systolic hardware architecture
IEEE Access

A NEURAL NETWORK PRIMER

1994

Neural networks are composed of basic units somewhat analogous to neurons. These units are linked to each other by connections whose strength is modifiable as a result of a learning process or algorithm. Each of these units integrates independently (in parallel) the information provided by its synapses in order to evaluate its state of activation. The unit response is then a linear or nonlinear function of its activation. Linear algebra concepts are used, in general, to analyze linear units, with eigenvectors and eigenvalues being the core concepts involved. This analysis makes clear the strong similarity between linear neural networks and the general linear model developed by statisticia…
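
In code, the unit described here is just a weighted sum followed by a response function; a minimal sketch:

```python
import numpy as np

def unit_response(inputs, weights, f=np.tanh):
    # The unit integrates its synaptic inputs into a state of activation
    # (a weighted sum), then responds through a linear or nonlinear f.
    activation = np.dot(weights, inputs)
    return f(activation)

# A linear unit is the case f(a) = a, the one analyzed in the text
# through eigenvectors and eigenvalues.
print(unit_response(np.array([0.5, -1.0, 2.0]), np.array([0.1, 0.4, 0.2])))
```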

Radial basis function network · Theoretical computer science · Ecology · Liquid state machine · Computer science · Time delay neural network · Applied Mathematics · Activation function · General Medicine · Topology · Agricultural and Biological Sciences (miscellaneous) · Hopfield network · Recurrent neural network · Multilayer perceptron · Types of artificial neural networks
Journal of Biological Systems

Using the Hermite Regression Formula to Design a Neural Architecture with Automatic Learning of the “Hidden” Activation Functions

2000

The value of the gradient of a neural network's output function, calculated at the training points, plays an essential role in its generalization capability. In this paper a feed-forward neural architecture (αNet) that can learn the activation functions of its hidden units during the training phase is presented. The automatic learning is obtained through the joint use of the Hermite regression formula and the CGD optimization algorithm with Powell restart conditions. This technique leads to a smooth output function of αNet in the neighborhood of the training points, improving the generalization capability and the flexibility of the neural architecture. Experimental results, ob…
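
A hidden activation of this kind can be sketched as a linear combination of Hermite polynomials whose coefficients are the trainable quantities; the sketch below uses NumPy's HermiteE evaluator and illustrative coefficients, and omits the CGD/Powell training loop the paper relies on.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite basis

def hermite_activation(x, coeffs):
    """Hidden-unit activation as a linear combination of Hermite
    polynomials; the coefficients are what training would adjust."""
    return hermeval(x, coeffs)

# Illustrative coefficients giving a smooth, odd activation shape:
# He_1(x) = x, He_3(x) = x^3 - 3x, so this evaluates 1.15x - 0.05x^3.
coeffs = [0.0, 1.0, 0.0, -0.05]
xs = np.linspace(-2, 2, 5)
print(hermite_activation(xs, coeffs))
```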

Flexibility (engineering) · Hermite polynomials · Artificial neural network · Computer science · Generalization · Activation function · Function (mathematics) · Sigmoid function · Artificial intelligence · Algorithm · Regression

Periodic orbits of single neuron models with internal decay rate 0 < β ≤ 1

2013

In this paper we consider a discrete dynamical system x_{n+1} = βx_n − g(x_n), n = 0, 1, …, arising as a discrete-time network of a single neuron, where 0 < β ≤ 1 is an internal decay rate and g is a signal function. A great deal of work has been done for the case where the signal function is a sigmoid function. However, a signal function of McCulloch-Pitts nonlinearity, described by a piecewise constant function, is also useful in the modelling of neural networks. We investigate a more complicated step signal function (a step function similar in shape to the sigmoid) and prove some results about the periodicity of solutions of the considered difference equation. These results show the complexity of …
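
A quick way to see such periodic orbits is to iterate the map numerically. The sketch below uses β = 1 and a McCulloch-Pitts signal function, for which the orbit of x₀ = 0.5 is easily checked by hand to have period 2.

```python
def mcculloch_pitts(x, threshold=0.0, lo=-1.0, hi=1.0):
    # Piecewise-constant (McCulloch-Pitts) signal function.
    return hi if x > threshold else lo

def orbit(x0, beta, g, n_steps=10):
    """Iterate x_{n+1} = beta * x_n - g(x_n)."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(beta * xs[-1] - g(xs[-1]))
    return xs

# With beta = 1: 0.5 -> -0.5 -> 0.5 -> ..., a period-2 orbit.
print(orbit(0.5, 1.0, mcculloch_pitts))
```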

Quantitative Biology::Neurons and Cognition · Mathematical analysis · Activation function · Sigmoid function · stability · Single-valued function · dynamical system · Error function · fixed point · Modeling and Simulation · Mittag-Leffler function · Step function · iterative process · Piecewise · nonlinear problem · Constant function · Analysis · Mathematics
Mathematical Modelling and Analysis

Efficient pruning of multilayer perceptrons using a fuzzy sigmoid activation function

2006

This Letter presents a simple and powerful pruning method for multilayer feed-forward neural networks based on the fuzzy sigmoid activation function presented in [E. Soria, J. Martin, G. Camps, A. Serrano, J. Calpe, L. Gomez, A low-complexity fuzzy activation function for artificial neural networks, IEEE Trans. Neural Networks 14(6) (2003) 1576-1579]. Successful performance is obtained on standard function approximation and channel equalization problems. Pruning allows network complexity to be reduced considerably while achieving performance similar to that of unpruned networks.
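
The fuzzy sigmoid of the cited letter is defined through fuzzy membership functions; as a rough stand-in for its low-complexity, piecewise-linear character, a "hard" sigmoid can be written as:

```python
import numpy as np

def hard_sigmoid(x, gamma=1.0):
    # Piecewise-linear stand-in (clip to [0, 1]) illustrating the
    # low-complexity idea; NOT the exact fuzzy sigmoid of Soria et al.
    return np.clip(0.5 + x / (2.0 * gamma), 0.0, 1.0)

print(hard_sigmoid(np.array([-3.0, -0.5, 0.0, 0.5, 3.0])))
# [0.   0.25 0.5  0.75 1.  ]
```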

Artificial neural network · Computer science · Time delay neural network · Cognitive Neuroscience · Activation function · Rectifier (neural networks) · Perceptron · Fuzzy logic · Computer Science Applications · Artificial Intelligence · Multilayer perceptron · Feedforward neural network · Pruning (decision trees) · Artificial intelligence · Types of artificial neural networks
Neurocomputing

A neural multi-agent based system for smart html pages retrieval

2003

A neural-based multi-agent system for smart HTML page retrieval is presented. The system is based on the EαNet architecture, a neural network capable of learning the activation functions of its hidden units and having good generalization capabilities. The system's goal is to retrieve documents satisfying a query and dealing with a specific topic. The system has been developed using the basic features supplied by the Jade platform for agent creation, coordination, and control. The system is composed of four agents: the trainer agent, the neural classifier mobile agent, the interface agent, and the librarian agent. The sub-symbolic knowledge of the neural classifier mobile agent is automatically …

Search engine · Artificial neural network · Computer science · Multi-agent system · Activation function · Mobile agent · Data mining · Document retrieval · Digital library · Classifier (UML)

A Concurrent Neural Classifier for HTML Documents Retrieval

2003

A neural-based multi-agent system for automatic HTML page retrieval is presented. The system is based on the EαNet architecture, a neural network having good generalization capabilities and able to learn the activation functions of its hidden units. The starting hypothesis is that the HTML pages are stored in networked repositories. The system's goal is to retrieve documents satisfying a user query and belonging to a given class (e.g. documents containing the word "football" and talking about "Sports"). The system is composed of three interacting agents: the EαNet Neural Classifier Mobile Agent, the Query Agent, and the Locator Agent. The whole system was successfully implemented exploiting t…

Network architecture · Artificial neural network · Computer science · Activation function · HTML · Machine learning · Mobile agent · Artificial intelligence · Document retrieval · Classifier (UML)

Hyper-flexible Convolutional Neural Networks based on Generalized Lehmer and Power Means

2022

The Convolutional Neural Network is one of the best-known members of the deep learning family of neural network architectures and is used for many purposes, including image classification. Despite their wide adoption, such networks are known to be highly tuned to the training data (samples representing a particular problem) and poorly reusable for new problems. One way to change this would be to apply, in addition to trainable weights, trainable parameters in the mathematical functions that simulate the various neural computations within such networks. In this way, we may distinguish between narrowly focused, task-specific parameters (weights) and more generic capability-spec…
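
As a minimal sketch of the idea (the paper's generalized means carry further parameters beyond this), a Lehmer-mean pooling with one trainable exponent p could look like:

```python
import numpy as np

def lehmer_mean(x, p, axis=-1, eps=1e-8):
    """Lehmer mean sum(x**p) / sum(x**(p-1)) over positive inputs.
    p = 1 is the arithmetic mean and large p approaches the maximum,
    so one trainable p interpolates between average- and max-pooling."""
    x = np.asarray(x, dtype=float) + eps   # keep inputs strictly positive
    return np.sum(x ** p, axis=axis) / np.sum(x ** (p - 1), axis=axis)

v = np.array([1.0, 2.0, 4.0])
print(lehmer_mean(v, 1.0))   # ~2.333 (arithmetic mean)
print(lehmer_mean(v, 10.0))  # ~3.996 (approaches max)
```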

neural network · Cognitive Neuroscience · Lehmer mean · deep learning · Machine Learning · flexibility · Power mean · Artificial Intelligence · convolution · adversarial robustness · pooling · Neural Networks, Computer · activation function · convolutional · generalization